[MoE][Common/PyTorch] Add permutation #936
Merged
phu0ngng merged 35 commits into NVIDIA:main on Aug 22, 2024
Conversation
Co-authored-by: Qi Zhang <qizhang@nvidia.com> Signed-off-by: Jiang Shao <jiangs@nvidia.com>
Contributor (Author):
Hi @phu0ngng @cyanguwa, this PR adds the permutation fusion operators needed by MoE. cc @QiZhangNV
Collaborator:
/te-ci pytorch
Collaborator:
Hi @StudyingShao, thanks for putting this work into TE.
phu0ngng reviewed Jul 1, 2024
phu0ngng requested changes Jul 2, 2024
phu0ngng reviewed Jul 3, 2024
yaox12 reviewed Jul 4, 2024
phu0ngng requested changes Jul 26, 2024
timmoon10 reviewed Aug 5, 2024
yaox12 reviewed Aug 6, 2024
Collaborator:
/te-ci pytorch
timmoon10 approved these changes Aug 21, 2024
Collaborator timmoon10 left a comment:
LGTM once the CI passes.
Contributor (Author):
/te-ci pytorch
Description
Permutation for the fp32/bf16/fp16/fp8 data types. Currently implemented as a PyTorch op only.
Additional descriptions: https://github.com/fanshiqing/moe_grouped_gemm/tree/dev
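Conceptually, the fused permutation operators reorder tokens so that all tokens routed to the same expert are contiguous in memory (the layout grouped GEMM expects), and restore the original token order on the way back. A minimal NumPy sketch of that permute/unpermute logic, assuming top-1 routing; the function names here are illustrative and are not the TE API:

```python
import numpy as np

def permute(tokens, expert_indices):
    # tokens: [num_tokens, hidden]; expert_indices: [num_tokens], expert id per token.
    # Stable sort by expert id so each expert's tokens are contiguous.
    sorted_row_id = np.argsort(expert_indices, kind="stable")
    return tokens[sorted_row_id], sorted_row_id

def unpermute(permuted_tokens, sorted_row_id):
    # Inverse operation: scatter rows back to their original positions.
    out = np.empty_like(permuted_tokens)
    out[sorted_row_id] = permuted_tokens
    return out

tokens = np.arange(8, dtype=np.float32).reshape(4, 2)
expert_indices = np.array([1, 0, 1, 0])
permuted, row_id = permute(tokens, expert_indices)
restored = unpermute(permuted, row_id)  # identical to tokens
```

The fused CUDA kernels in this PR additionally handle fp8 data and the backward pass, which this sketch omits.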